Audit
Anomaly Detection in Double-entry Bookkeeping Data by Federated Learning System with Non-model Sharing Approach
Mashiko, Sota, Kawamata, Yuji, Nakayama, Tomoru, Sakurai, Tetsuya, Okada, Yukihiko
Anomaly detection is crucial in financial auditing, and effective detection often requires large volumes of data from multiple organizations. However, confidentiality concerns hinder data sharing among audit firms. Although the federated learning (FL)-based approach FedAvg has been proposed to address this challenge, its use of multiple communication rounds increases overhead, limiting its practicality. In this study, we propose a novel framework employing Data Collaboration (DC) analysis -- a non-model-sharing FL method -- to streamline model training into a single communication round. Our method first encodes journal entry data via dimensionality reduction to obtain secure intermediate representations, then transforms them into collaboration representations for building an autoencoder that detects anomalies. We evaluate our approach on a synthetic dataset and on real journal entry data from multiple organizations. The results show that our method not only outperforms single-organization baselines but also exceeds FedAvg in non-i.i.d. experiments on real journal entry data that closely mirror real-world conditions. By preserving data confidentiality and reducing iterative communication, this study addresses a key auditing challenge -- ensuring data confidentiality while integrating knowledge from multiple audit firms. Our findings represent a significant advance in artificial intelligence-driven auditing and underscore the potential of FL methods in high-security domains.
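The single-communication-round pipeline described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the PCA encoders, the shared random anchor matrix, and the least-squares alignment step are all assumptions standing in for the paper's actual dimensionality-reduction and collaboration-representation functions, and the downstream anomaly-scoring autoencoder is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_basis(X, dim):
    # Top-`dim` principal directions of X, as a (features, dim) projection.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:dim].T

# Publicly shared anchor data that every organization projects.
n_anchor, n_features, dim = 50, 20, 5
anchor = rng.normal(size=(n_anchor, n_features))

# Each organization holds private journal data and its own private encoder.
orgs = [rng.normal(size=(100, n_features)) for _ in range(3)]
bases = [pca_basis(X, dim) for X in orgs]

# Single round: each org sends only intermediate representations,
# never raw data and never model weights.
anchor_ints = [anchor @ F for F in bases]
data_ints = [X @ F for X, F in zip(orgs, bases)]

# Collaboration step at the analysis center: map every org's anchor
# representation onto org 0's via least squares; the same map aligns
# that org's private data into a common collaboration space.
target = anchor_ints[0]
collab = []
for A_int, D_int in zip(anchor_ints, data_ints):
    G, *_ = np.linalg.lstsq(A_int, target, rcond=None)
    collab.append(D_int @ G)

# Pooled collaboration representations, ready for joint model training.
pooled = np.vstack(collab)
```

In the paper's framework an autoencoder would then be trained on `pooled`, flagging entries with high reconstruction error as anomalous; the sketch stops at the representation-alignment step, which is what replaces FedAvg's iterative weight exchange.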
Auditing of AI: Legal, Ethical and Technical Approaches
AI auditing is a rapidly growing field of research and practice. This review article, which doubles as an editorial to Digital Society's topical collection on Auditing of AI, provides an overview of previous work in the field. Three key points emerge from the review. First, contemporary attempts to audit AI systems have much to learn from how audits have historically been structured and conducted in areas like financial accounting, safety engineering and the social sciences. Second, both policymakers and technology providers have an interest in promoting auditing as an AI governance mechanism. Academic researchers can thus fill an important role by studying the feasibility and effectiveness of different AI auditing procedures. Third, AI auditing is an inherently multidisciplinary undertaking, to which substantial contributions have been made by computer scientists and engineers as well as social scientists, philosophers, legal scholars and industry practitioners. Reflecting this diversity of perspectives, different approaches to AI auditing have different affordances and constraints. Specifically, a distinction can be made between technology-oriented audits, which focus on the properties and capabilities of AI systems, and process-oriented audits, which focus on technology providers' governance structures and quality management systems. The next step in the evolution of auditing as an AI governance mechanism, this article concludes, should be the interlinking of these available (and complementary) approaches into structured and holistic procedures to audit not only how AI systems are designed and used but also how they impact users, societies and the natural environment in applied settings over time.
Advancing AI Audits for Enhanced AI Governance
Ema, Arisa, Sato, Ryo, Hase, Tomoharu, Nakano, Masafumi, Kamimura, Shinji, Kitamura, Hiromu
As artificial intelligence (AI) is integrated into various services and systems in society, many companies and organizations have proposed AI principles and policies and made related commitments. Conversely, some have proposed the need for independent audits, arguing that the voluntary principles adopted by the developers and providers of AI services and systems insufficiently address risk. This policy recommendation summarizes the issues related to the auditing of AI services and systems and presents three recommendations for promoting AI auditing that contribute to sound AI governance. Recommendation 1: Development of institutional design for AI audits. Recommendation 2: Training human resources for AI audits. Recommendation 3: Updating AI audits in accordance with technological progress. In this policy recommendation, AI is assumed to be technology that recognizes and predicts data, with the last chapter outlining how generative AI should be audited.
Risk-limiting Financial Audits via Weighted Sampling without Replacement
Shekhar, Shubhanshu, Xu, Ziyu, Lipton, Zachary C., Liang, Pierre J., Ramdas, Aaditya
We introduce the notion of a risk-limiting financial audit (RLFA): given $N$ transactions, the goal is to estimate the total misstated monetary fraction ($m^*$) to a given accuracy $\epsilon$, with confidence $1-\delta$. We do this by constructing new confidence sequences (CSs) for the weighted average of $N$ unknown values, based on samples drawn without replacement according to a (randomized) weighted sampling scheme. Using the idea of importance weighting to construct test martingales, we first develop a framework to construct CSs for arbitrary sampling strategies. Next, we develop methods to improve the quality of CSs by incorporating side information about the unknown values associated with each item. We show that when the side information is sufficiently predictive, it can directly drive the sampling. Addressing the case where the accuracy is unknown a priori, we introduce a method that incorporates side information via control variates. Crucially, our construction is adaptive: if the side information is highly predictive of the unknown misstated amounts, the benefits of incorporating it are significant; but if the side information is uncorrelated, our methods learn to ignore it. Our methods recover state-of-the-art bounds for the special case when the weights are equal, which has already found applications in election auditing. The harder weighted case addresses the more challenging problem of AI-assisted financial auditing.
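The core sampling primitive in this abstract -- drawing transactions without replacement with probability proportional to monetary weight, then using importance weighting to keep the running estimate unbiased -- can be sketched as follows. This is an illustrative simulation, not the paper's construction: the data-generating process, the draw budget, and the per-step estimator are all assumptions, and no confidence sequence or test martingale is computed.

```python
import numpy as np

rng = np.random.default_rng(7)

# N transactions with monetary weights w and misstated fractions x in [0, 1].
N = 200
w = rng.uniform(1.0, 10.0, size=N)
x = (rng.random(N) < 0.1) * rng.uniform(0.2, 1.0, size=N)  # ~10% misstated
m_star = np.dot(w, x) / w.sum()  # true misstated monetary fraction

def audit_estimate(w, x, n_draws, rng):
    """Sample without replacement, proportional to remaining weight.

    At each step, seen_sum + W_rem * x_drawn is an unbiased estimate of
    sum_i w_i x_i, because the drawn item is selected with conditional
    probability w_i / W_rem (the importance-weighting identity)."""
    remaining = np.ones(len(w), dtype=bool)
    seen_sum, step_estimates = 0.0, []
    for _ in range(n_draws):
        W_rem = w[remaining].sum()
        probs = np.where(remaining, w, 0.0) / W_rem
        i = rng.choice(len(w), p=probs)
        step_estimates.append(seen_sum + W_rem * x[i])
        seen_sum += w[i] * x[i]
        remaining[i] = False
    # Average the unbiased per-step estimates and normalize to a fraction.
    return np.mean(step_estimates) / w.sum()

# Averaged over repeated simulated audits, the estimator concentrates
# around m_star even though each audit inspects only 30 of 200 items.
estimates = [audit_estimate(w, x, 30, rng) for _ in range(400)]
```

The paper's contribution is turning such per-step unbiased estimates into an anytime-valid confidence sequence on $m^*$ (with side information folded in via control variates), which this sketch does not attempt.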
We're using AI more and more for hiring. How can we ensure it's fair?
It is important to consider how market incentives and governmental oversight affect algorithm development, because it is not obvious that best practices will otherwise prevail. In the absence of oversight or threat of litigation, there are good reasons to be skeptical that these employment models are typically rigorous in their approach to fairness. Most prominent of these reasons is that it is more expensive to take a thorough approach to fairness. It is time-consuming to task highly skilled data scientists and engineers with making robust and fair algorithmic processes, rather than building new features or delivering a model to a client. Further, it adds expense to collect more diverse and representative data to use in developing these models before deploying them.
Independent Auditor's Report
The Board of Directors American Association For Artificial Intelligence Menlo Park, California We have audited the statement of financial position of American Association for Artificial Intelligence as of December 31, 1996 and the related statements of activities, changes in net assets and cash flows for the year then ended. These financial statements are the responsibility of the Association's management. Our responsibility is to express an opinion on these financial statements based on our audits. We conducted our audits in accordance with generally accepted auditing standards. Those standards require that we plan and perform the audit to obtain reasonable assurance about whether the financial statements are free of material misstatement.
Cognitive Assistance for Automating the Analysis of the Federal Acquisition Regulations System
Saha, Srishty (University of Maryland, Baltimore County) | Joshi, Karuna P. (University of Maryland, Baltimore County) | Frank, Renee (University of Maryland, Baltimore County)
Government regulations are critical to understanding how to do business with a government entity and receive other benefits. However, government regulations are also notoriously long and organized in ways that can be confusing for novice users. Developing cognitive assistance tools that remove some of the burden from human users is of potential benefit to a variety of users. The volume of data found in United States federal government regulation suggests a multiple-step approach: process the data into machine-readable text, create an automated legal knowledge base capturing various facts and rules, and eventually build a legal question-and-answer system that acquires understanding from various regulations and provisions. Our work discussed in this paper represents our initial efforts to build a framework for the Federal Acquisition Regulations System (Title 48, Code of Federal Regulations) in order to create an efficient legal knowledge base representing relationships between various legal elements, semantically similar terminologies, deontic expressions, and cross-referenced legal facts and rules.
Scoping out the audit of the future
After revolutionizing tax and accounting over the course of decades, technology finally looks poised to reshape the third major service of the traditional accounting practice: the audit. Machine learning, data analytics, ever-more-powerful and mobile computers, and new tools like blockchain will do more than just change the way auditors do their job -- increasingly, they'll change what that job is. To get a glimpse of what the audit (and auditor) of the future will look like, Accounting Today convened a virtual roundtable of experts in the field. Sharing their thoughts on the future of auditing here are: Mark Baer, managing partner of the audit services group at Top 10 Firm Crowe Horwath; Frank Casal, vice chair of audit at Big Four firm KPMG; Cindy Fornelli, the executive director of the Center for Audit Quality; Joel Shamon, the national audit leader at Top Five Firm RSM US; and Jimmy Thompson, an audit partner at Texas-based MaloneBailey. Which trends -- whether technological, regulatory, economic or otherwise -- should auditors be paying the most attention to over the next five years? Casal: Audit professionals' work is fundamentally about "trust."
Innovation in Audit Takes Analytics, AI Route - Deloitte CIO - WSJ
The advent of audit analytics and cognitive technology does not mean the end of human auditors. It means an end to painstaking checking and crossfooting of debit and credit entries and the beginning of auditing careers that thrive on understanding, monitoring, and improving analytical and cognitive systems. I have worked for a couple of decades with professional services firms that perform financial audits, but I have never done one -- nor have I ever wanted to do one, to be honest. I'm not good with work that involves structured processes, details, and rigorous checking, and audits always seemed heavily infused with those kinds of tasks. Now, however, I am becoming quite interested in audits for two reasons.